[TRTLLM-10271][test] Add Spark QA functional and performance cases #10564
Conversation
Signed-off-by: Jenny Liu <[email protected]>
/bot run
📝 Walkthrough

This PR extends the integration test suite with support for new LLM models (Llama 3.3 Nemotron, DeepSeek R1, Gemma 3, Qwen3, Qwen2.5-VL) by updating model path mappings, adding parametrized test cases with dynamic memory expectations, and reorganizing performance test configurations into a YAML-based specification format.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 0
🧹 Nitpick comments (1)
tests/integration/defs/perf/test_perf.py (1)
95-96: Consider removing trailing slash for consistency.

Line 96 has a trailing slash (`"DeepSeek-R1/DeepSeek-R1-Distill-Llama-70B/"`), while most other `MODEL_PATH_DICT` entries (e.g., line 94: `"DeepSeek-R1/DeepSeek-R1-Distill-Qwen-32B"`) do not. While `os.path.join()` typically handles this, maintaining consistency reduces potential path-handling edge cases.

🔧 Suggested fix

```diff
-    "deepseek_r1_distill_llama_70b":
-    "DeepSeek-R1/DeepSeek-R1-Distill-Llama-70B/",
+    "deepseek_r1_distill_llama_70b":
+    "DeepSeek-R1/DeepSeek-R1-Distill-Llama-70B",
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- tests/integration/defs/perf/test_perf.py
- tests/integration/defs/test_e2e.py
- tests/integration/test_lists/qa/llm_digits_core.txt
- tests/integration/test_lists/qa/llm_digits_func.txt
- tests/integration/test_lists/qa/llm_digits_perf.txt
- tests/integration/test_lists/qa/llm_digits_perf.yml
💤 Files with no reviewable changes (1)
- tests/integration/test_lists/qa/llm_digits_perf.txt
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing Python modules, even if only one class or function from a module is used
Python filenames should use snake_case (e.g., `some_file.py`)
Python classes should use PascalCase (e.g., `class SomeClass`)
Python functions and methods should use snake_case (e.g., `def my_awesome_function():`)
Python local variables should use snake_case, with prefix `k` for variable names that start with a number (e.g., `k_99th_percentile`)
Python global variables should use upper snake_case with prefix `G` (e.g., `G_MY_GLOBAL`)
Python constants should use upper snake_case (e.g., `MY_CONSTANT`)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Use comments in Python for code within a function, or interfaces that are local to a file
Use Google-style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with the format `"""<type>: Description"""`
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block for the main logic
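To make the naming rules concrete, here is a small invented sketch that follows them (every name below is made up for illustration):

```python
"""Example module illustrating the naming conventions above."""

G_REQUEST_COUNTER = 0  # Global variable: upper snake_case with G prefix.
MAX_BATCH_SIZE = 64    # Constant: upper snake_case.


class ModelRunner:  # Class: PascalCase.
    """Track request latencies for a model run."""

    def __init__(self):
        # Initialize all externally visible members in the constructor.
        self.latencies_ms = []

    def record_latency(self, latency_ms: float) -> None:  # Method: snake_case.
        """Record one request latency in milliseconds."""
        self.latencies_ms.append(latency_ms)

    def summarize(self) -> float:
        """Return the 99th-percentile latency in milliseconds."""
        if not self.latencies_ms:
            return 0.0
        ordered = sorted(self.latencies_ms)
        # A local name that would start with a number gets the k prefix.
        k_99th_percentile = ordered[min(len(ordered) - 1, int(len(ordered) * 0.99))]
        return k_99th_percentile
```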
Files:
- tests/integration/defs/test_e2e.py
- tests/integration/defs/perf/test_perf.py
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification
Files:
- tests/integration/defs/test_e2e.py
- tests/integration/defs/perf/test_perf.py
🧠 Learnings (10)
📓 Common learnings
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
- tests/integration/test_lists/qa/llm_digits_core.txt
- tests/integration/test_lists/qa/llm_digits_perf.yml
- tests/integration/test_lists/qa/llm_digits_func.txt
- tests/integration/defs/test_e2e.py
📚 Learning: 2025-09-17T02:48:52.732Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.
Applied to files:
- tests/integration/test_lists/qa/llm_digits_core.txt
- tests/integration/test_lists/qa/llm_digits_perf.yml
- tests/integration/test_lists/qa/llm_digits_func.txt
- tests/integration/defs/test_e2e.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
- tests/integration/test_lists/qa/llm_digits_core.txt
- tests/integration/test_lists/qa/llm_digits_perf.yml
- tests/integration/test_lists/qa/llm_digits_func.txt
- tests/integration/defs/test_e2e.py
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
- tests/integration/test_lists/qa/llm_digits_core.txt
- tests/integration/test_lists/qa/llm_digits_func.txt
- tests/integration/defs/test_e2e.py
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Applied to files:
- tests/integration/test_lists/qa/llm_digits_core.txt
- tests/integration/test_lists/qa/llm_digits_perf.yml
- tests/integration/test_lists/qa/llm_digits_func.txt
- tests/integration/defs/test_e2e.py
📚 Learning: 2025-08-13T11:07:11.772Z
Learnt from: Funatiq
Repo: NVIDIA/TensorRT-LLM PR: 6754
File: tests/integration/test_lists/test-db/l0_a30.yml:41-47
Timestamp: 2025-08-13T11:07:11.772Z
Learning: In TensorRT-LLM test configuration files like tests/integration/test_lists/test-db/l0_a30.yml, TIMEOUT values are specified in minutes, not seconds.
Applied to files:
tests/integration/test_lists/qa/llm_digits_perf.yml
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.
Applied to files:
tests/integration/defs/test_e2e.py
📚 Learning: 2025-08-06T03:47:16.802Z
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6650
File: tests/integration/test_lists/qa/llm_perf_cluster.yml:33-37
Timestamp: 2025-08-06T03:47:16.802Z
Learning: Ministral is a valid model name from Mistral AI, distinct from the regular Mistral models. In TensorRT-LLM test configurations, "ministral_8b" and "ministral_8b_fp8" are correct model identifiers and should not be changed to "mistral_8b".
Applied to files:
tests/integration/defs/perf/test_perf.py
📚 Learning: 2025-08-06T03:47:16.802Z
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6650
File: tests/integration/test_lists/qa/llm_perf_cluster.yml:33-37
Timestamp: 2025-08-06T03:47:16.802Z
Learning: Ministral is a valid and distinct model family from Mistral AI, separate from their regular Mistral models. Ministral 8B is specifically designed for edge computing and on-device applications, released in October 2024. In TensorRT-LLM test configurations, "ministral_8b" and "ministral_8b_fp8" are correct model identifiers and should not be changed to "mistral_8b".
Applied to files:
tests/integration/defs/perf/test_perf.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (15)
tests/integration/test_lists/qa/llm_digits_perf.yml (2)
1-13: Verify hardware specification: aarch64 CPU and GB10 GPU wildcards.

The condition specifies `cpu: aarch64` (ARM-based architecture) and wildcard `*gb10*`, which appear to be DGX-Spark-specific. Confirm this is the intended target hardware and not a typo or copy-paste error. Additionally, a `system_gpu_count` range of `1` to `1` (exactly 1 GPU) is very restrictive. Verify this constraint is intentional, as it excludes multi-GPU testing on DGX-Spark systems.
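To illustrate what the reviewer is asking to verify, here is a hedged sketch of how such a condition block could be matched against a system description. The schema below is inferred from this comment alone, not taken from the actual scheduler code:

```python
import fnmatch

# Inferred condition schema: CPU architecture, GPU-name wildcard, GPU count range.
condition = {
    "cpu": "aarch64",
    "gpu_wildcard": "*gb10*",
    "system_gpu_count": {"gte": 1, "lte": 1},
}


def matches(system: dict) -> bool:
    """Return True if a system satisfies the (inferred) condition schema."""
    count = condition["system_gpu_count"]
    return (
        system["cpu"] == condition["cpu"]
        and fnmatch.fnmatch(system["gpu"].lower(), condition["gpu_wildcard"])
        and count["gte"] <= system["gpu_count"] <= count["lte"]
    )


print(matches({"cpu": "aarch64", "gpu": "NVIDIA GB10", "gpu_count": 1}))  # True
```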
15-47: Verify test identifiers match perf/test_perf.py parametrization and model definitions.

The 33 test cases reference models, variants (FP4/FP8/NVFP4/BF16), and specific benchmarking parameters. Confirm that (a sketch of such a check follows this list):
- All model names (e.g., `gpt_oss_20b_fp4`, `qwen3_8b_fp8`, `nemotron_nano_v2_nvfp4`, etc.) are defined in `test_perf.py` with matching path entries in `MODEL_PATH_DICT` and `HF_MODEL_PATH`.
- The test parametrization string format matches the actual pytest parametrization in `perf/test_perf.py::test_perf`.
- All new models (Llama 3.3 Nemotron, Qwen3, Phi4, DeepSeek R1, Gemma 3, Qwen 2.5-VL) are properly wired into the test framework.
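A minimal sketch of such a consistency check, assuming `MODEL_PATH_DICT` and `HF_MODEL_PATH` are dictionaries keyed by model name and that the test module is importable as a package (the referenced names are an illustrative subset, not the full list):

```python
from tests.integration.defs.perf.test_perf import HF_MODEL_PATH, MODEL_PATH_DICT

# Illustrative subset of names referenced by llm_digits_perf.yml.
referenced_models = [
    "gpt_oss_20b_fp4",
    "qwen3_8b_fp8",
    "nemotron_nano_v2_nvfp4",
]

# Every referenced model should resolve to a path in one of the dictionaries.
missing = [
    name
    for name in referenced_models
    if name not in MODEL_PATH_DICT and name not in HF_MODEL_PATH
]
if missing:
    raise AssertionError(f"Test list references undefined models: {missing}")
```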
tests/integration/test_lists/qa/llm_digits_func.txt (1)
1-44: Verify test identifiers and model path mappings exist in test_e2e.py and accuracy test files.

The functional test list references:
- 35 `test_e2e.py` parametrized tests with specific model names and HF/project paths
- 8 accuracy tests from `test_llm_api_pytorch.py` and `test_llm_api_pytorch_multimodal.py`

Confirm that:
- All model/path pairs (e.g., `GPT-OSS-20B-gpt_oss/gpt-oss-20b`, `Qwen3-30B-A3B_nvfp4_hf-Qwen3/saved_models_Qwen3-30B-A3B_nvfp4_hf`) are defined in `test_e2e.py` model path mappings.
- Test methods `test_ptp_quickstart_advanced` and `test_ptp_quickstart_multimodal_phi4mm` exist with correct parametrization.
- Accuracy test classes (`TestLlama3_1_8B`, `TestQwen2_5_VL_7B`, `TestQwen3_30B_A3B`, `TestPhi4MM`) exist in the respective test files with corresponding test methods.
- Model naming conventions are consistent across files (underscores vs camelCase in test names vs path parameters).
tests/integration/test_lists/qa/llm_digits_core.txt (2)
1-40: Verify test identifiers and model mappings exist in test_e2e.py and accuracy files.

Similar to `llm_digits_func.txt`, this core test list references parametrized tests. Confirm:
- All model/path pairs are defined in `test_e2e.py` (e.g., `Llama3.1-8B-FP8-llama-3.1-model/Llama-3.1-8B-Instruct-FP8`).
- Test method `test_ptp_quickstart_advanced_eagle3` exists and is properly parametrized for the GPT-OSS-120B Eagle3 variant (line 35).
- Multimodal test method `test_ptp_quickstart_multimodal_phi4mm` exists with correct parametrization for Phi4MM variants (lines 12-20).
- Accuracy test classes and methods exist in `test_llm_api_pytorch.py` and `test_llm_api_pytorch_multimodal.py`.
35-35: New test method: Verify test_ptp_quickstart_advanced_eagle3 implementation.

This core list includes a test for Eagle3 optimization (`test_ptp_quickstart_advanced_eagle3`), which appears to be a new or specialized test method. Ensure that:
- The method is properly implemented in `test_e2e.py`.
- Memory assertions or other Eagle3-specific validations are correctly configured.
- The test is integrated with the broader quickstart advanced test framework.
tests/integration/defs/test_e2e.py (4)
1905-1942: LGTM: Comprehensive test coverage expansion.

The new test parameters appropriately extend coverage across multiple model families (Llama, Qwen, Phi, Nemotron) with various quantization levels (BF16, FP4, FP8, NVFP4). The pytest marks correctly gate tests based on GPU architecture requirements.
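As an illustration of such gating, a minimal sketch using standard pytest marks — the capability probe `_supports_nvfp4` is a placeholder, not the suite's actual helper:

```python
import pytest


def _supports_nvfp4() -> bool:
    """Placeholder capability probe; the real suite uses its own helpers."""
    return False  # e.g., detect a Blackwell-class GPU here


@pytest.mark.parametrize(
    "model_name",
    [
        "Llama3.1-8B-FP8",
        pytest.param(
            "Nemotron-Nano-v2-nvfp4",
            marks=pytest.mark.skipif(
                not _supports_nvfp4(),
                reason="NVFP4 variant needs a newer GPU architecture",
            ),
        ),
    ],
)
def test_ptp_quickstart_advanced(model_name):
    # The real test launches the quickstart flow; elided here.
    assert isinstance(model_name, str)
```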
1946-1946: LGTM: Consistent with new Nemotron-Nano-v2 test parameter.

Correctly extends the conditional branch to handle the newly added Nemotron-Nano-v2-nvfp4 model variant.
1974-1974: LGTM: Appropriate extension for Llama 3.3 70B variant.

Correctly applies the same `max_num_tokens` constraint to Llama3.3-70B as Llama3.1-70B, which is appropriate given their similar size and memory footprint.
2093-2128: LGTM: Well-designed dynamic memory expectation pattern.

The addition of dynamic `expected_mem` computation (lines 2103-2107) is a good improvement that makes the test more maintainable and extensible. The memory values (106.71 GiB for GPT-OSS-120B vs 25.2 GiB for Llama-3.1-8B) are reasonable given the respective model sizes, and the comments clearly document the expectations.
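For illustration, a minimal sketch of the pattern being praised, assuming a simple lookup table — the helper name and table are invented; only the two GiB figures come from this comment:

```python
# Hypothetical per-model memory expectations (GiB); values mirror the
# review comment above, not the actual test code.
_EXPECTED_MEM_GIB = {
    "GPT-OSS-120B": 106.71,
    "Llama-3.1-8B": 25.2,
}


def expected_mem_gib(model_name: str, default: float = 25.2) -> float:
    """Return the expected peak GPU memory for a model, in GiB."""
    return _EXPECTED_MEM_GIB.get(model_name, default)


assert expected_mem_gib("GPT-OSS-120B") == 106.71
```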
tests/integration/defs/perf/test_perf.py (6)

72-73: LGTM: Consistent Nemotron Super v1.5 FP8 model addition.

The new entry follows the established pattern for Nemotron model variants.
102-104: LGTM: Gemma 3 27B model variants added.

The new entries follow the established pattern for Gemma models. The mixed case in quantization suffixes (`fp8` vs `FP4`) likely reflects the actual directory structure.
116-125: LGTM: Comprehensive Qwen3 model family additions.

The additions provide good coverage of Qwen3 model sizes (8B, 14B, 30B, 32B) with appropriate quantization variants (BF16, FP8, FP4/NVFP4). The path naming conventions are consistent with existing Qwen3 entries.
128-130: LGTM: Vision-language model variants appropriately categorized.

The Qwen2.5-VL entries are correctly placed under the `multimodals/` directory, consistent with the treatment of other vision-language models in this configuration.
149-166: LGTM: Phi-4 reasoning and multimodal variants well-structured.

The additions appropriately distinguish between reasoning-focused models (lines 149-151) and multimodal models with separate image/audio configurations (lines 155-166). The quantization variants (BF16, FP8, FP4) provide comprehensive coverage for performance testing.
173-173: LGTM: Nemotron Nano v2 NVFP4 quantization variant added.

The entry appropriately complements the BF16 variant and follows the established naming convention for NVFP4 quantized models.
PR_Github #31188 [ run ] triggered by Bot. Commit:
…has the issue torch.AcceleratorError: CUDA error: an illegal instruction was encountered based on 1.2.0rc7 image Signed-off-by: Jenny Liu <[email protected]>
I updated the Qwen3-30B-A3B-NVFP4 path from Qwen3/saved_models_Qwen3-30B-A3B_nvfp4_hf to the public Hugging Face Qwen3/nvidia-Qwen3-30B-A3B-NVFP4, because the old one has the following issue; the run passes after switching to the Hugging Face model.
PR_Github #31188 [ run ] completed with state
/bot run
PR_Github #31247 [ run ] triggered by Bot. Commit:
PR_Github #31247 [ run ] completed with state
LGTM, a couple of tiny comments. Thanks @JennyLiu-nv
Signed-off-by: Jenny Liu <[email protected]>
Thanks Faraz for all these comments. Except for the following one, all comments are resolved.
/bot run
PR_Github #31442 [ run ] triggered by Bot. Commit:
PR_Github #31442 [ run ] completed with state
/bot run
PR_Github #31479 [ run ] triggered by Bot. Commit:
PR_Github #31479 [ run ] completed with state
…l server not synced Signed-off-by: Jenny Liu <[email protected]>
/bot run
PR_Github #31497 [ run ] triggered by Bot. Commit:
PR_Github #31497 [ run ] completed with state
/bot run
PR_Github #31522 [ run ] triggered by Bot. Commit:
PR_Github #31522 [ run ] completed with state
/bot run |
farazkh80 left a comment
Thanks for the changes, LGTM.
PR_Github #31592 [ run ] triggered by Bot. Commit:
PR_Github #31592 [ run ] completed with state
Signed-off-by: Jenny Liu <[email protected]>
/bot run
PR_Github #31637 [ run ] triggered by Bot. Commit:
PR_Github #31637 [ run ] completed with state
@ruodil please help to merge, since all the tests were reviewed by @farazkh80 and @pamelap-nvidia, and CI has also passed. Thanks a lot.
…VIDIA#10564) Signed-off-by: Jenny Liu <[email protected]> Co-authored-by: Jenny Liu <[email protected]> Signed-off-by: Daniil Kulko <[email protected]>
Summary by CodeRabbit
Add QA perf and func cases for DGX-Spark
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user-friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
Details
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.
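For example, an illustrative invocation combining several of the flags documented below (the stage and GPU names are the placeholder examples from this help text):

```
/bot run --disable-fail-fast --stage-list "A10-PyTorch-1" --gpu-type "A30, H100_PCIe"
```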
- `--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL): Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages, and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill
`kill`: Kill all running builds associated with the pull request.
skip
`skip --comment COMMENT`: Skip testing for the latest commit on the pull request.
`--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline
`reuse-pipeline`: Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.